
    Heuristic 3d Reconstruction Of Irregular Spaced Lidar

    As more data sources have become abundantly available, an increased interest in 3D reconstruction has emerged in the image-processing research community. Applications of 3D reconstruction of urban and residential buildings include urban planning, network planning for mobile communication, tourism information systems, spatial analysis of air pollution and noise nuisance, microclimate investigations, and Geographical Information Systems (GISs). Previous, classical 3D reconstruction algorithms relied solely on aerial photography. With the advent of LIDAR systems, current algorithms explore captured LIDAR data as an additional feasible source of information for 3D reconstruction. Preprocessing techniques are proposed for the development of an autonomous 3D reconstruction algorithm designed to derive three-dimensional models of urban and residential buildings from raw LIDAR data. First, a greedy insertion triangulation algorithm, modified with a proposed noise-filtering technique, triangulates the raw LIDAR data. The normal vectors of those triangles are then passed to an unsupervised clustering algorithm, Fuzzy Simplified Adaptive Resonance Theory (Fuzzy SART), which returns a rough grouping of coplanar triangles. A proposed multiple regression algorithm then refines this coplanar grouping by removing outliers and deriving an improved planar segmentation of the raw LIDAR data. Finally, further refinement is achieved by computing the intersection of the best-fit roof planes and snapping nearby points onto that intersection, resulting in straight roof ridges. Together, these techniques yield a well-defined model approximating the building depicted by the LIDAR data.
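
    As a rough illustration of the final ridge-straightening step, here is a minimal Python sketch: it intersects two best-fit roof planes and snaps nearby points onto the resulting ridge line. The plane coefficients, synthetic points, and snapping threshold are invented for illustration and are not taken from the thesis.

        import numpy as np

        def plane_intersection_line(n1, d1, n2, d2):
            """Point and unit direction of the line where planes n1.x = d1
            and n2.x = d2 meet (planes assumed non-parallel)."""
            direction = np.cross(n1, n2)
            # Adding the constraint direction.x = 0 picks one point on the line.
            A = np.vstack([n1, n2, direction])
            point = np.linalg.solve(A, np.array([d1, d2, 0.0]))
            return point, direction / np.linalg.norm(direction)

        def snap_points_to_ridge(points, origin, direction, threshold):
            """Move points lying within `threshold` of the ridge line onto it."""
            along = (points - origin) @ direction          # signed distance along line
            nearest = origin + np.outer(along, direction)  # foot of perpendicular
            dist = np.linalg.norm(points - nearest, axis=1)
            snapped = points.copy()
            snapped[dist < threshold] = nearest[dist < threshold]
            return snapped

        # Two synthetic roof planes meeting in a ridge along the x-axis.
        n1, d1 = np.array([0.0, 0.5, 1.0]), 5.0
        n2, d2 = np.array([0.0, -0.5, 1.0]), 5.0
        origin, direction = plane_intersection_line(n1, d1, n2, d2)
        points = np.random.default_rng(0).normal([0.0, 0.0, 5.0], 0.5, (200, 3))
        straightened = snap_points_to_ridge(points, origin, direction, threshold=0.2)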

    Unsupervised Building Detection From Irregularly Spaced Lidar And Aerial Imagery

    As more data sources containing 3-D information become available, interest in 3-D imaging has grown. Among its applications is the 3-D reconstruction of buildings and other man-made structures. A necessary preprocessing step is the detection and isolation of individual buildings, which can subsequently be reconstructed in 3-D using various methodologies. Building detection and reconstruction have commercial uses in urban planning, network planning for mobile communication (cell-phone tower placement), spatial analysis of air pollution and noise nuisance, microclimate investigations, geographical information systems, security services, and change detection in areas affected by natural disasters. They are also used in the military for automatic target recognition and in entertainment for virtual tourism. Previously proposed building detection and reconstruction algorithms relied solely on aerial imagery. With the advent of Light Detection and Ranging (LiDAR) systems providing elevation data, current algorithms explore captured LiDAR data as an additional feasible source of information. Additional sources of information can help automate techniques (alleviating the need for manual user intervention) as well as increase their capabilities and accuracy. Several building detection approaches surveyed in the open literature have fundamental weaknesses that hinder their use, such as requiring multiple data sets from different sensors, mandating that certain operations be carried out manually, or being limited to detecting only certain types of buildings. In this work, a building detection system is proposed and implemented that strives to overcome the limitations seen in existing techniques. The developed framework is flexible: it can perform building detection from LiDAR data alone (first or last return) or from nadir color aerial imagery alone, and if data from both LiDAR and aerial imagery are available, the algorithm uses them both for improved accuracy. Additionally, the proposed approach avoids severely limiting assumptions, enabling the end user to apply it to a wider variety of building types. The approach is extensively tested on real data sets and compared with other existing techniques, and experimental results are presented.
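
    To make the flexibility concrete, below is a minimal sketch of how a detector can accept either data source or both. The height threshold, the excess-green vegetation cue, and the AND-fusion rule are illustrative assumptions, not the classifier actually developed in this work.

        import numpy as np

        def detect_buildings(ndsm=None, rgb=None,
                             height_thresh=2.5, greenness_thresh=0.0):
            """Boolean building-candidate mask from whichever inputs exist.

            ndsm: height above ground in meters (from LiDAR), or None
            rgb:  float image in [0, 1] with shape (H, W, 3), or None
            """
            masks = []
            if ndsm is not None:
                masks.append(ndsm > height_thresh)  # tall enough to be a structure
            if rgb is not None:
                # Excess-green index: vegetation scores high, roofs usually do not.
                greenness = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]
                masks.append(greenness <= greenness_thresh)
            if not masks:
                raise ValueError("need at least one of ndsm or rgb")
            out = masks[0]
            for m in masks[1:]:
                out = out & m  # both cues must agree when both are available
            return out

        rng = np.random.default_rng(1)
        ndsm = rng.uniform(0.0, 12.0, (64, 64))        # synthetic height map, meters
        rgb = rng.uniform(0.0, 1.0, (64, 64, 3))       # synthetic orthophoto
        mask = detect_buildings(ndsm, rgb)             # fused LiDAR + imagery
        mask_lidar_only = detect_buildings(ndsm=ndsm)  # degrades gracefully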

    Innovation Strategies for Digital Assets & Wealth Management

    The core pillars of the project are separated into three parts: Persona/User Research, Financial Valuation, and Crypto & Wealth Ecosystems. The research examines the role of technology and digital transformation in the wealth management industry in an attempt to demystify crypto-related concepts that directly or indirectly impact wealth managers. Topics covered include (1) assessing the commonalities of the competitive landscape and market structure for Registered Investment Advisors, providing insight into the potential for network effects and economies of scale; (2) evaluating the crypto and wealth ecosystem using public financial filings for Envestnet (Ticker: ENV) to provide a comprehensive viewpoint from the perspective of a wealth manager, given the potential effect on advisory fee structures as more funds shift into digital assets; and (3) reviewing relevant crypto theses and solution-based approaches for financial advisors, given the conservative nature of the wealth management industry and its other participants, in both regulated and unregulated environments.

    Disclosure: Information provided about Envestnet herein is from publicly available sources. Nothing herein is either a legal interpretation or a statement of Envestnet policy. Reference to any specific product or entity does not constitute an endorsement or recommendation by Envestnet. The views expressed by the authors are their own, and this report does not imply an endorsement of the findings or of any particular product by Envestnet. To the extent this report contains statements by Envestnet employees, the views and opinions expressed are those of the employees and do not necessarily reflect the views of Envestnet or any of its officials.

    The University of Maryland ALP team serves as an independent research body to the wealth ecosystem, exploring innovation strategies for digital assets and wealth management. During the consult, the team conducted a comprehensive case study to identify key data points for solution development that would support the incorporation of cryptocurrencies into 'traditional' financial advisory segments. The research was built in conjunction with industry leader Envestnet, which specializes in the B2B FinTech, wealth, and financial advisory segments for independent advisors and high-net-worth individuals. The authors present the essentials of the wealth management industry from the perspective of Registered Investment Advisor (RIA) firms that provide financial advice and planning for clients.

    A mini-corer for precision sampling of the water-sediment interface in subglacial lakes and other remote aqueous environments: Sampling low energy water sediment interfaces

    Recent interest in Antarctic subglacial lakes has seen the development of bespoke systems for sampling them. These lakes are considered pristine environments, potentially harboring undisturbed sedimentary sequences and ecosystems adapted to cold, oligotrophic conditions in the absence of sunlight. The water-sediment interface is considered a prime location for the detection of microbial life and so is of particular interest. This article describes the development of a small corer to capture and retain a short core that includes the water-sediment interface, specifically to address the question of whether life exists in these lakes. The apparatus was developed as part of the UK-led project to access, measure, and sample subglacial Lake Ellsworth. In addition to addressing the constraints of coring in this difficult environment, the results of subsequent testing suggest that the corer can be applied to sampling sediments in other environments and would be particularly useful in low-energy environments where the water-sediment interface is indistinct or unconsolidated.

    Readout of a quantum processor with high dynamic range Josephson parametric amplifiers

    We demonstrate a high dynamic range Josephson parametric amplifier (JPA) in which the active nonlinear element is implemented using an array of rf-SQUIDs. The device is matched to the 50 Ω environment with a Klopfenstein-taper impedance transformer and achieves a bandwidth of 250-300 MHz, with input saturation powers up to -95 dBm at 20 dB gain. A 54-qubit Sycamore processor was used to benchmark these devices, providing a calibration for readout power, an estimate of amplifier added noise, and a platform for comparison against standard impedance-matched parametric amplifiers with a single dc-SQUID. We find that the high-power rf-SQUID array design has no adverse effect on system noise, readout fidelity, or qubit dephasing, and we estimate an upper bound on amplifier added noise at 1.6 times the quantum limit. Lastly, amplifiers with this design show no degradation in readout fidelity due to gain compression, which can occur in multi-tone multiplexed readout with traditional JPAs.
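
    For context on the added-noise figure, the quantum limit referenced here is the standard Caves bound for phase-preserving amplifiers; the arithmetic below is a worked note on the quoted bound, not additional data from the paper.

        % A phase-preserving amplifier must add at least half an
        % input-referred photon of noise (the quantum limit):
        \[
          n_{\mathrm{add}} \;\ge\; \tfrac{1}{2}
        \]
        % so an upper bound of 1.6 times the quantum limit corresponds to
        \[
          n_{\mathrm{add}} \;\le\; 1.6 \times \tfrac{1}{2} \;=\; 0.8 \text{ photons}
        \]
        % referred to the amplifier input.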

    Measurement-Induced State Transitions in a Superconducting Qubit: Within the Rotating Wave Approximation

    Superconducting qubits typically use a dispersive readout scheme, where a resonator is coupled to a qubit such that its frequency is qubit-state dependent. Measurement is performed by driving the resonator, where the transmitted resonator field yields information about the resonator frequency and thus the qubit state. Ideally, we could use arbitrarily strong resonator drives to achieve a target signal-to-noise ratio in the shortest possible time. However, experiments have shown that when the average resonator photon number exceeds a certain threshold, the qubit is excited out of its computational subspace, which we refer to as a measurement-induced state transition. These transitions degrade readout fidelity and constitute leakage, which precludes further operation of the qubit in, for example, error correction. Here we study these transitions using a transmon qubit by experimentally measuring their dependence on qubit frequency, average photon number, and qubit state, in the regime where the resonator frequency is lower than the qubit frequency. We observe signatures of resonant transitions between levels in the coupled qubit-resonator system that exhibit noisy behavior when measured repeatedly in time. We provide a semi-classical model of these transitions based on the rotating wave approximation and use it to predict the onset of state transitions in our experiments. Our results suggest the transmon is excited to levels near the top of its cosine potential following a state transition, where the charge dispersion of higher transmon levels explains the observed noisy behavior of state transitions. Moreover, occupation in these higher energy levels poses a major challenge for fast qubit reset.
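
    For reference, the qubit-state-dependent resonator frequency described above is captured by the textbook dispersive Hamiltonian (a standard approximation, not the full model used in the paper):

        % Dispersive Jaynes-Cummings Hamiltonian: the resonator frequency is
        % shifted by ±χ depending on the qubit state.
        \[
          H/\hbar \;=\; \omega_r\, a^\dagger a \;+\; \tfrac{1}{2}\,\omega_q\, \sigma_z
          \;+\; \chi\, a^\dagger a\, \sigma_z
        \]
        % ω_r: resonator frequency; ω_q: qubit frequency; χ: dispersive shift.
        % The resonator responds at ω_r ± χ depending on the qubit state,
        % which is what the measurement drive probes.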

    Overcoming leakage in scalable quantum error correction

    Leakage of quantum information out of computational states into higher energy states represents a major challenge in the pursuit of quantum error correction (QEC). In a QEC circuit, leakage builds over time and spreads through multi-qubit interactions. This leads to correlated errors that degrade the exponential suppression of logical error with scale, challenging the feasibility of QEC as a path towards fault-tolerant quantum computation. Here, we demonstrate the execution of a distance-3 surface code and distance-21 bit-flip code on a Sycamore quantum processor where leakage is removed from all qubits in each cycle. This shortens the lifetime of leakage and curtails its ability to spread and induce correlated errors. We report a ten-fold reduction in steady-state leakage population on the data qubits encoding the logical state and an average leakage population of less than 1 × 10⁻³ throughout the entire device. The leakage removal process itself efficiently returns leakage population back to the computational basis, and adding it to a code circuit prevents leakage from inducing correlated error across cycles, restoring a fundamental assumption of QEC. With this demonstration that leakage can be contained, we resolve a key challenge for practical QEC at scale.
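
    As a toy illustration of why per-cycle removal caps the steady-state leakage population, consider the recurrence p ← (1 − r)p + γ, where γ is the per-cycle leakage injection probability and r the per-cycle removal fraction; the fixed point is γ/r. The rates in this Python sketch are invented placeholders, not numbers from the paper.

        def steady_state_leakage(gamma, removal, cycles=500):
            """Iterate p <- (1 - removal) * p + gamma over QEC cycles."""
            p = 0.0
            for _ in range(cycles):
                p = (1.0 - removal) * p + gamma  # inject new leakage, then remove
            return p

        gamma = 2e-4  # assumed per-cycle injection probability (placeholder)
        weak = steady_state_leakage(gamma, removal=0.02)    # slow natural decay only
        strong = steady_state_leakage(gamma, removal=0.50)  # active removal each cycle
        print(f"weak removal:   {weak:.1e}")    # ~ gamma / 0.02 = 1e-2
        print(f"strong removal: {strong:.1e}")  # ~ gamma / 0.50 = 4e-4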

    Measurement-induced entanglement and teleportation on a noisy quantum processor

    Measurement has a special role in quantum theory: by collapsing the wavefunction it can enable phenomena such as teleportation and thereby alter the "arrow of time" that constrains unitary evolution. When integrated in many-body dynamics, measurements can lead to emergent patterns of quantum information in space-time that go beyond established paradigms for characterizing phases, either in or out of equilibrium. On present-day NISQ processors, the experimental realization of this physics is challenging due to noise, hardware limitations, and the stochastic nature of quantum measurement. Here we address each of these experimental challenges and investigate measurement-induced quantum information phases on up to 70 superconducting qubits. By leveraging the interchangeability of space and time, we use a duality mapping to avoid mid-circuit measurement and access different manifestations of the underlying phases, from entanglement scaling to measurement-induced teleportation, in a unified way. We obtain finite-size signatures of a phase transition with a decoding protocol that correlates the experimental measurement record with classical simulation data. The phases display sharply different sensitivity to noise, which we exploit to turn an inherent hardware limitation into a useful diagnostic. Our work demonstrates an approach to realize measurement-induced physics at scales that are at the limits of current NISQ processors.

    Non-Abelian braiding of graph vertices in a superconducting processor

    Indistinguishability of particles is a fundamental principle of quantum mechanics. For all elementary and quasiparticles observed to date, including fermions, bosons, and Abelian anyons, this principle guarantees that the braiding of identical particles leaves the system unchanged. However, in two spatial dimensions an intriguing possibility exists: braiding of non-Abelian anyons causes rotations in a space of topologically degenerate wavefunctions. Hence, it can change the observables of the system without violating the principle of indistinguishability. Despite the well-developed mathematical description of non-Abelian anyons and numerous theoretical proposals, the experimental observation of their exchange statistics has remained elusive for decades. Controllable many-body quantum states generated on quantum processors offer another path for exploring these fundamental phenomena. While efforts on conventional solid-state platforms typically involve Hamiltonian dynamics of quasiparticles, superconducting quantum processors allow for directly manipulating the many-body wavefunction via unitary gates. Building on predictions that stabilizer codes can host projective non-Abelian Ising anyons, we implement a generalized stabilizer code and unitary protocol to create and braid them. This allows us to experimentally verify the fusion rules of the anyons and braid them to realize their statistics. We then study the prospect of employing the anyons for quantum computation and utilize braiding to create an entangled state of anyons encoding three logical qubits. Our work provides new insights into non-Abelian braiding and, through the future inclusion of error correction to achieve topological protection, could open a path toward fault-tolerant quantum computing.
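
    For reference, the fusion rules in question are the standard textbook set for Ising anyons (σ: non-Abelian anyon, ψ: fermion, I: vacuum); these are the general Ising-anyon rules, not a result specific to this paper's encoding.

        % Ising anyon fusion rules
        \[
          \sigma \times \sigma = I + \psi, \qquad
          \sigma \times \psi = \sigma, \qquad
          \psi \times \psi = I
        \]
        % The two possible outcomes of σ × σ underlie the topologically
        % degenerate state space used to encode logical qubits.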

    Suppressing quantum errors by scaling a surface code logical qubit

    Practical quantum computing will require error rates well below those achievable with physical qubits. Quantum error correction offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, where increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here, we report the measurement of logical qubit performance scaling across multiple code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, both in terms of logical error probability over 25 cycles and logical error per cycle (2.914% ± 0.016% compared to 3.028% ± 0.023%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10⁻⁶ logical error per round floor set by a single high-energy event (1.6 × 10⁻⁷ when excluding this event). We are able to accurately model our experiment, and from this model we can extract error budgets that highlight the biggest challenges for future systems. These results mark the first experimental demonstration where quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation.
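
    A quick arithmetic check of the quoted per-cycle error rates, using the error-suppression factor Λ = ε_d/ε_{d+2} commonly used in the surface-code literature to quantify scaling (the ratio below is computed from the numbers quoted above; the framing is ours):

        % Ratio of distance-3 to distance-5 logical error per cycle
        \[
          \Lambda \;\approx\; \frac{\varepsilon_3}{\varepsilon_5}
          \;=\; \frac{3.028\%}{2.914\%} \;\approx\; 1.04
        \]
        % Λ > 1 means each increase in code distance suppresses the logical
        % error rate; here only slightly above break-even, consistent with
        % "modestly outperforms".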